# Cross-lingual understanding
## Armenian Text Embeddings 1
Metric-AI · Text Embedding · Transformers · Multilingual · 578 downloads · 18 likes

An Armenian text embedding model built on multilingual-e5-base, supporting semantic search and cross-lingual understanding.

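Embedding models such as the one above map queries and documents into a shared vector space, so semantic search reduces to ranking documents by cosine similarity to the query vector. A minimal pure-Python sketch — the three-dimensional vectors below are made-up toy values, not real model outputs (a real model emits hundreds of dimensions):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, doc_vecs):
    # Rank documents by similarity to the query, best first.
    scored = [(cosine(query_vec, v), i) for i, v in enumerate(doc_vecs)]
    return sorted(scored, reverse=True)

# Toy document embeddings (hypothetical values).
docs = [[0.9, 0.1, 0.0], [0.1, 0.8, 0.2], [0.0, 0.2, 0.9]]
query = [0.85, 0.15, 0.05]
ranking = search(query, docs)  # ranking[0][1] is the best-matching index
```

Because both queries and documents live in the same multilingual space, the same lookup works across languages.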
## Fillmaskmodel
Okyx · MIT · Large Language Model · Transformers · 46 downloads · 1 like

A fill-mask model fine-tuned from xlm-roberta-base for predicting masked text segments.

## XLM RoBERTa Xtreme En
arize-ai · MIT · Sequence Labeling · Transformers · 5 downloads · 0 likes

A token classification model fine-tuned from xlm-roberta-base on the xtreme_en dataset, supporting multilingual text processing.

## Bloom 3b
bigscience · OpenRAIL · Large Language Model · Transformers · Multilingual · 11.82k downloads · 92 likes

BLOOM is a multilingual large language model supporting 46 languages, focused on text generation tasks.

## Xlm Roberta Base Finetuned Marc
begar · MIT · Text Classification · Transformers · 9 downloads · 0 likes

A text classification model fine-tuned from xlm-roberta-base on the Multilingual Amazon Reviews Corpus (MARC).

## Tf Xlm Roberta Base
jplu · Large Language Model · Transformers · 4,820 downloads · 1 like

XLM-RoBERTa is a cross-lingual sentence encoder trained on 2.5 TB of data covering 100 languages, achieving strong performance on multiple cross-lingual benchmarks.

## Tf Xlm Roberta Large
jplu · Large Language Model · Transformers · 236 downloads · 1 like

XLM-RoBERTa Large is a large-scale cross-lingual sentence encoder trained on 2.5 TB of data covering 100 languages, achieving strong performance on multiple cross-lingual benchmarks.

## Bert Base Multilingual Uncased Finetuned
am-shb · Large Language Model · Transformers · 16 downloads · 0 likes

A fine-tuned model based on bert-base-multilingual-uncased; its specific purpose is not clearly stated.

## Mt5 Large Finetuned Mnli Xtreme Xnli
alan-turing-institute · Apache-2.0 · Large Language Model · Transformers · Multilingual · 964 downloads · 13 likes

A multilingual T5 model fine-tuned specifically for zero-shot text classification, supporting 15 languages.

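The XNLI-style recipe behind zero-shot classifiers like this one: turn each candidate label into a hypothesis ("this text is about {label}"), score premise-hypothesis entailment, and pick the label whose hypothesis is most entailed. A sketch of that control flow, where `toy_score` is a hypothetical word-overlap stand-in for the real model's entailment probability:

```python
def zero_shot_classify(text, labels, score_fn, template="this text is about {}"):
    # Build one hypothesis per candidate label, score premise/hypothesis
    # entailment, and return the best label plus all scores.
    hypotheses = {lab: template.format(lab) for lab in labels}
    scores = {lab: score_fn(text, hyp) for lab, hyp in hypotheses.items()}
    return max(scores, key=scores.get), scores

def toy_score(premise, hypothesis):
    # Hypothetical stand-in: count shared lowercase words. A real pipeline
    # would use an NLI model's entailment probability here instead.
    return len(set(premise.lower().split()) & set(hypothesis.lower().split()))

label, scores = zero_shot_classify(
    "this article is about sports", ["sports", "cooking"], toy_score
)
```

Because no label-specific training is needed, any label set can be supplied at inference time — which is what makes the approach "zero-shot".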
## Cino Base V2
hfl · Apache-2.0 · Large Language Model · Transformers · Multilingual · 156 downloads · 5 likes

CINO is a multilingual pre-trained model for Chinese minority languages, built on the XLM-R framework and supporting Chinese plus seven minority languages.

## Xlm Mlm 17 1280
FacebookAI · Large Language Model · Transformers · Multilingual · 201 downloads · 2 likes

XLM is a cross-lingual model pre-trained on text in 17 languages using the masked language modeling (MLM) objective.

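The MLM objective named above corrupts the input by replacing a fraction of tokens (15% in BERT-style pretraining) with a `[MASK]` symbol and trains the model to recover them. A minimal sketch of just the corruption step, seeded for determinism — real pipelines operate on subword IDs rather than whitespace tokens:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    # Replace ~mask_rate of the tokens with [MASK]; return the corrupted
    # sequence and the positions the model must predict.
    rng = random.Random(seed)
    n = max(1, round(len(tokens) * mask_rate))
    positions = sorted(rng.sample(range(len(tokens)), n))
    corrupted = list(tokens)
    for i in positions:
        corrupted[i] = "[MASK]"
    return corrupted, positions

tokens = "the cat sat on the mat".split()
corrupted, positions = mask_tokens(tokens)
```

The same objective applied jointly over many languages is what gives XLM-style models their shared cross-lingual representations.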
## Bert Base En Fr Nl Ru Ar Cased
Geotrend · Apache-2.0 · Large Language Model · Other · 22 downloads · 0 likes

A streamlined version of bert-base-multilingual-cased that supports only English, French, Dutch, Russian, and Arabic while maintaining the original model's accuracy.

## Distilbert Base 25lang Cased
Geotrend · Apache-2.0 · Large Language Model · Transformers · Multilingual · 80 downloads · 1 like

A compact version of distilbert-base-multilingual-cased supporting 25 languages while maintaining the original model's accuracy.
